47 research outputs found

    A UML and Petri Nets Integrated Modeling Method for Business Processes in Virtual Enterprises

    The Virtual Enterprise is an important organizational pattern for future enterprises, and one of its major functions is distributed, parallel business process execution. This paper studies business process modeling in virtual enterprises. Based on an object-oriented description of business processes in virtual enterprises, we propose a modeling method that integrates UML and Petri nets. The method provides an integrative framework supporting requirement description, model specification and design, model analysis and simulation, and model implementation.
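The executable side of the UML/Petri-net integration can be illustrated with a minimal Petri net interpreter (a sketch, not the paper's method; all place and transition names below are invented for illustration): places hold tokens, and a transition fires when every input place is marked, which is how parallel branches of a business process synchronize.

```python
# Minimal Petri net: places carry token counts, transitions consume
# tokens from input places and produce tokens in output places.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two partner activities run in parallel, then synchronize on "ship".
net = PetriNet({"order_received": 1, "stock_checked": 1})
net.add_transition("pick", ["order_received"], ["picked"])
net.add_transition("reserve", ["stock_checked"], ["reserved"])
net.add_transition("ship", ["picked", "reserved"], ["shipped"])
net.fire("pick")
net.fire("reserve")
net.fire("ship")   # enabled only after both branches complete
```

The joint "ship" transition only becomes enabled once both parallel branches have deposited their tokens, capturing the synchronization of distributed partners.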

    Enhanced Meta-Learning for Cross-lingual Named Entity Recognition with Minimal Resources

    For languages with no annotated resources, transferring knowledge from rich-resource languages is an effective solution for named entity recognition (NER). While existing methods transfer directly from a source-trained model to a target language, in this paper we propose to fine-tune the learned model with a few similar examples given a test case, which can benefit the prediction by leveraging the structural and semantic information conveyed in such similar examples. To this end, we present a meta-learning algorithm to find a good model parameter initialization that can quickly adapt to a given test case, and we propose to construct multiple pseudo-NER tasks for meta-training by computing sentence similarities. To further improve the model's generalization ability across languages, we introduce a masking scheme and augment the loss function with an additional maximum term during meta-training. We conduct extensive experiments on cross-lingual named entity recognition with minimal resources over five target languages. The results show that our approach significantly outperforms existing state-of-the-art methods across the board.
    Comment: This paper is accepted by AAAI 2020. Code is available at https://github.com/microsoft/vert-papers/tree/master/papers/Meta-Cros
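The core idea, an initialization that adapts quickly after a few fine-tuning steps on a similar task, can be sketched with a Reptile-style meta-update (a toy stand-in, not the paper's algorithm; the quadratic pseudo-tasks are invented for illustration):

```python
import numpy as np

def sgd_steps(theta, grad_fn, lr=0.1, steps=5):
    """Inner loop: fine-tune a copy of the shared init on one task."""
    w = theta.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w

def reptile_update(theta, tasks, meta_lr=0.5, **inner_kwargs):
    """Outer loop: nudge the init toward the average adapted params."""
    adapted = [sgd_steps(theta, g, **inner_kwargs) for g in tasks]
    return theta + meta_lr * (np.mean(adapted, axis=0) - theta)

# Toy pseudo-tasks: quadratic losses with optima at 1.0 and 3.0,
# so the gradient of each task is simply w - c.
tasks = [lambda w, c=c: w - c for c in (np.array([1.0]), np.array([3.0]))]
theta = np.zeros(1)
for _ in range(100):
    theta = reptile_update(theta, tasks)
print(theta)  # converges near the task mean, 2.0
```

From this initialization, a handful of gradient steps reaches either task's optimum quickly, which is the property the paper exploits at test time with similarity-retrieved examples.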

    Estimates of daily ground-level NO2 concentrations in China based on big data and machine learning approaches

    Nitrogen dioxide (NO2) is one of the most important atmospheric pollutants. However, current ground-level NO2 concentration data lack either high resolution or full nationwide coverage, due to the poor quality of the source data and the limited computing power of the models. To our knowledge, this study is the first to estimate ground-level NO2 concentrations in China with national coverage as well as relatively high spatiotemporal resolution (0.25 degrees; daily intervals) over the most recent six years (2013-2018). We developed a Random Forest model integrated with K-means clustering (RF-K) for the estimates, using multi-source parameters. Besides meteorological and satellite-retrieval parameters, we also, for the first time, introduce socio-economic parameters to assess the impact of human activities. The results show that: (1) the RF-K model we developed shows better prediction performance than other models, with cross-validation R2 = 0.64 (MAPE = 34.78%). (2) The annual average NO2 concentration in China showed a weak increasing trend, while in economic zones such as the Beijing-Tianjin-Hebei region, the Yangtze River Delta, and the Pearl River Delta, the NO2 concentration decreased or remained unchanged, especially in spring. Our dataset verifies that pollutant control targets have been achieved in these areas. By mapping daily nationwide ground-level NO2 concentrations, this study provides timely, high-quality data for air quality management in China. We provide a universal model framework, based on improved machine learning methods, for quickly generating timely national atmospheric pollutant concentration maps at high spatiotemporal resolution.
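One plausible reading of the RF-K design, sketched here with scikit-learn on synthetic data (this is an assumption about the architecture, not the authors' code): partition samples with K-means, then train a separate Random Forest per cluster so each forest specializes on one regime of the multi-source predictors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

class RFK:
    """K-means gating over per-cluster Random Forest regressors."""
    def __init__(self, k=3, **rf_kwargs):
        self.km = KMeans(n_clusters=k, n_init=10, random_state=0)
        self.rf_kwargs = rf_kwargs
        self.forests = {}

    def fit(self, X, y):
        labels = self.km.fit_predict(X)
        for c in np.unique(labels):
            m = labels == c
            rf = RandomForestRegressor(random_state=0, **self.rf_kwargs)
            self.forests[c] = rf.fit(X[m], y[m])
        return self

    def predict(self, X):
        labels = self.km.predict(X)
        out = np.empty(len(X))
        for c, rf in self.forests.items():
            m = labels == c
            if m.any():
                out[m] = rf.predict(X[m])
        return out

# Synthetic predictors standing in for meteorological, satellite,
# and socio-economic features; y is an arbitrary smooth target.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] * 2 + np.sin(X[:, 1])
model = RFK(k=3, n_estimators=50).fit(X, y)
preds = model.predict(X[:5])
```

Clustering first lets each forest fit a more homogeneous subpopulation, which is one common motivation for combining K-means with tree ensembles.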

    Carbon Monitor Cities, near-real-time daily estimates of CO2 emissions from 1500 cities worldwide

    Building on near-real-time and spatially explicit estimates of daily carbon dioxide (CO2) emissions, here we present and analyze a new city-level dataset of fossil fuel and cement emissions. Carbon Monitor Cities provides daily, city-level estimates of emissions from January 2019 through December 2021 for 1500 cities in 46 countries, disaggregated into five sectors: power generation, residential (buildings), industry, ground transportation, and aviation. The goal of this dataset is to improve the timeliness and temporal resolution of city-level emission inventories; it includes estimates for both functional urban areas and city administrative areas that are consistent with global and regional totals. Comparisons with other datasets (i.e. CEADs, MEIC, Vulcan, and CDP) were performed, and we estimate the overall uncertainty to be 21.7%. Carbon Monitor Cities is a near-real-time, city-level emission dataset that covers cities around the world, including the first estimates for many cities in low-income countries.

    Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images

    Objective: In order to automatically and rapidly recognize the layers of corneal images obtained by in vivo confocal microscopy (IVCM) and classify them as normal or abnormal, a computer-aided diagnostic model based on deep learning was developed and tested to reduce physicians' workload.
    Methods: A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University (Wuhan, China) and Zhongnan Hospital of Wuhan University (Wuhan, China). Images were reviewed and categorized by three corneal specialists before training and testing the models, which comprise a layer recognition model (epithelium, Bowman's membrane, stroma, and endothelium) and a diagnostic model, to identify the layers of corneal images and distinguish normal images from abnormal ones. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by four ophthalmologists and artificial intelligence (AI). To evaluate the efficacy of the model, eight trainees were asked to recognize these 580 images both with and without model assistance, and the results of the two evaluations were analyzed to explore the effects of model assistance.
    Results: In the internal test dataset, the accuracy of the model reached 0.914, 0.957, 0.967, and 0.950 for recognition of the epithelium, Bowman's membrane, stroma, and endothelium, respectively, and 0.961, 0.932, 0.945, and 0.959 for recognition of normal/abnormal images at each layer. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively. In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of specialists and higher than that of senior physicians, and its recognition speed was 237 times that of specialists. With model assistance, the accuracy of trainees increased from 0.712 to 0.886.
    Conclusion: A computer-aided diagnostic model based on deep learning was developed for IVCM images; it rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficiency of clinical diagnosis and assist physicians in training and learning for clinical purposes.
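The two-stage design described above (first predict the corneal layer, then apply a per-layer normal/abnormal classifier) can be sketched as follows; toy feature vectors stand in for IVCM images and logistic models stand in for the deep networks, so everything below is illustrative, not the study's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))            # stand-in image features
layer = rng.integers(0, 4, size=400)     # 0..3: epithelium..endothelium
abnormal = rng.integers(0, 2, size=400)  # 0 normal, 1 abnormal

# Stage 1: one multi-class model over the four corneal layers.
stage1 = LogisticRegression(max_iter=1000).fit(X, layer)
# Stage 2: a separate binary classifier trained per layer.
stage2 = {c: LogisticRegression(max_iter=1000)
             .fit(X[layer == c], abnormal[layer == c])
          for c in range(4)}

def diagnose(x):
    """Route the image through stage 1, then the matching stage-2 model."""
    c = int(stage1.predict(x.reshape(1, -1))[0])
    flag = int(stage2[c].predict(x.reshape(1, -1))[0])
    return c, flag

result = diagnose(X[0])
```

Splitting the diagnostic step per layer lets each binary model learn layer-specific appearance of pathology instead of one model covering all layers at once.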

    Maximizing Your Wealth

    No full text
    The objective of this project was to construct an optimal portfolio for small investors.
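The abstract does not say which method was used, but one classical route to an "optimal portfolio" is Markowitz mean-variance optimization; as a sketch under that assumption, the minimum-variance fully invested weights (shorting allowed) have the closed form w = Σ⁻¹1 / (1ᵀΣ⁻¹1):

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance weights: solve cov @ w = 1, normalize."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Illustrative 3-asset covariance matrix (annualized return variances).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
print(np.round(w, 3))
```

Because any single asset is itself a feasible portfolio, the resulting portfolio variance w'Σw can never exceed the smallest individual asset variance.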